Forecasting Crowd Work Quality via Multi-dimensional Features of Workers
Authors
Abstract
Modeling changes in individual crowd worker performance over time offers new ways to improve the quality of crowd labels, such as by dynamically routing label annotation tasks to workers more likely to produce reliable labels. Whereas prior crowd annotator models have typically adopted a single generative approach, we formulate a discriminative, flexible feature-based model. This allows us to combine multiple generative models and integrate additional behavioral evidence, enabling better adaptation to temporal variance in worker accuracy. Experiments with a public crowdsourcing dataset show that our model improves prediction accuracy by 26-36% across workers, enabling 29-47% higher-quality crowd labels to be collected at 17-45% lower cost. Furthermore, we confirm that our proposed model yields significantly more accurate predictions than baselines under limited supervision.
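The abstract describes a discriminative, feature-based predictor of worker label quality that combines outputs of generative annotator models with behavioral evidence. As a minimal illustrative sketch (not the paper's actual model), the idea can be framed as a logistic score over a hand-picked feature vector; every feature name and weight below is an invented assumption for illustration:

```python
import math

def sigmoid(x):
    """Standard logistic function mapping a real score to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def predict_correct_prob(features, weights, bias=0.0):
    """Discriminative score: probability the worker's next label is correct,
    computed as a logistic function of a weighted feature combination."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return sigmoid(z)

# Hypothetical feature vector for one worker on one task:
#  - recent_accuracy: empirical accuracy over the worker's recent labels
#  - gen_model_post:  confidence output by a generative annotator model
#  - time_on_task:    normalized dwell time (behavioral evidence)
features = {"recent_accuracy": 0.9, "gen_model_post": 0.8, "time_on_task": 0.5}
weights = {"recent_accuracy": 2.0, "gen_model_post": 1.5, "time_on_task": 0.3}

p = predict_correct_prob(features, weights, bias=-2.0)
# A task router could then prefer workers whose p exceeds a threshold.
```

In practice the weights would be learned from labeled history (e.g., by logistic regression), and the feature set is exactly where multiple generative models and behavioral signals can be mixed, which is the flexibility the abstract claims over single-model approaches.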
Similar resources
Crowdsourcing a HIT: Measuring Workers' Pre-Task Interactions on Microtask Markets
The ability to entice and engage crowd workers to participate in human intelligence tasks (HITs) is critical for many human computation systems and large-scale experiments. While various metrics have been devised to measure and improve the quality of worker output via task designs, effective recruitment of crowd workers is often overlooked. To help us gain a better understanding of crowd recrui...
Predicting Healthcare Workers' Work Performance Based on Safety-Ergonomic Features of Medical Gloves at Fars Province Hospitals, Iran, 2021
Background: Healthcare workers’ work performance is an important issue affected by the clinical work environment and equipment. The present study aims to predict healthcare workers’ work performance based on safety-ergonomic features of hands and medical gloves. Materials and Methods: This cross-sectional study was conducted on healthcare workers at the hospitals of Shiraz University of Medica...
Multi-Objective Crowd Worker Selection in Crowdsourced Testing
Crowdsourced testing is an emerging trend in software testing, which relies on crowd workers to accomplish test tasks. Typically, a crowdsourced testing task aims to detect as many bugs as possible within a limited budget. For a specific test task, not all crowd workers are qualified to perform it, and different test tasks require crowd workers to have different experiences, domain knowledge, e...
Effect of trapping questions on the reliability of speech quality judgments in a crowdsourcing paradigm
This paper reports on a crowdsourcing study investigating the influence of trapping questions on the reliability of the collected data. The crowd workers were asked to provide quality ratings for speech samples from a standard database. In addition, they were presented with different types of trapping questions, which were designed based on previous research. The ratings obtained from the crowd...
Beyond AMT: An Analysis of Crowd Work Platforms
While Amazon’s Mechanical Turk (AMT) helped launch the paid crowd work industry eight years ago, many new vendors now offer a range of alternative models. Despite this, little crowd work research has explored other platforms. Such near-exclusive focus risks letting AMT’s particular vagaries and limitations overly shape our understanding of crowd work and the research questions and directions be...
Publication date: 2015